Structural Attention Neural Networks for improved sentiment analysis
We introduce a tree-structured attention neural network for sentences and
small phrases and apply it to the problem of sentiment classification. Our
model extends existing recursive models by incorporating structural information
around a node of the syntactic tree, using both bottom-up and top-down
information propagation. The model also employs structural attention to
identify the most salient representations during the construction of the
syntactic tree. To our knowledge, the proposed model achieves state-of-the-art
performance on the Stanford Sentiment Treebank dataset.
Comment: Submitted to EACL 2017 for review
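As a reading aid, the following is a minimal PyTorch sketch of attention over the candidate representations of a tree node (e.g. its bottom-up and top-down states stacked with those of its children); the module name, the scoring scheme, and the dimensions are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class StructuralAttention(nn.Module):
    # Score each candidate vector against a learned context vector and return
    # the attention-weighted sum as the node's salient representation.
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        self.context = nn.Parameter(torch.randn(hidden_dim))

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (num_vectors, hidden_dim)
        scores = torch.tanh(self.proj(states)) @ self.context   # (num_vectors,)
        weights = torch.softmax(scores, dim=0)                   # attention weights
        return (weights.unsqueeze(-1) * states).sum(dim=0)       # (hidden_dim,)

# Usage: combine five candidate representations of a tree node into one vector.
att = StructuralAttention(hidden_dim=128)
node_repr = att(torch.randn(5, 128))
print(node_repr.shape)  # torch.Size([128])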
Learning monocular 3D reconstruction of articulated categories from motion
Monocular 3D reconstruction of articulated object categories is challenging
due to the lack of training data and the inherent ill-posedness of the problem.
In this work we use video self-supervision, forcing the consistency of
consecutive 3D reconstructions by a motion-based cycle loss. This largely
improves both optimization-based and learning-based 3D mesh reconstruction. We
further introduce an interpretable model of 3D template deformations that
controls a 3D surface through the displacement of a small number of local,
learnable handles. We formulate this operation as a structured layer relying on
mesh-Laplacian regularization and show that it can be trained in an end-to-end
manner. We finally introduce a per-sample numerical optimization approach that
jointly optimizes over mesh displacements and cameras within a video, boosting
accuracy both during training and as test-time post-processing. While relying
exclusively on a small set of videos collected per category for supervision, we
obtain state-of-the-art reconstructions with diverse shapes, viewpoints and
textures for multiple articulated object categories.
Comment: For project website see https://fkokkinos.github.io/video_3d_reconstruction
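A minimal sketch of what a motion-based cycle loss between consecutive reconstructions might look like is given below; the weak-perspective camera model, the use of 2D motion (e.g. optical flow) as the consistency signal, and all names are illustrative assumptions rather than the paper's exact formulation.

import torch

def project(vertices, camera):
    # Weak-perspective projection with camera = (scale, tx, ty) -- an assumed,
    # simplified camera model.
    return camera[0] * vertices[:, :2] + camera[1:]

def motion_cycle_loss(verts_t, cam_t, verts_t1, cam_t1, flow_t_to_t1):
    # Project the frame-t and frame-(t+1) reconstructions, displace the frame-t
    # projections by the observed 2D motion, and penalize disagreement with the
    # frame-(t+1) projections.
    pts_t = project(verts_t, cam_t)
    pts_t1 = project(verts_t1, cam_t1)
    return ((pts_t + flow_t_to_t1 - pts_t1) ** 2).sum(dim=-1).mean()

# Toy usage with 642 vertices per mesh and random inputs.
V = 642
loss = motion_cycle_loss(
    torch.randn(V, 3), torch.tensor([1.0, 0.0, 0.0]),
    torch.randn(V, 3), torch.tensor([1.0, 0.0, 0.0]),
    torch.randn(V, 2),
)
print(loss)

The handle-based deformation layer can similarly be pictured as each vertex moving by a blend of a few handle displacements, kept smooth by a mesh-Laplacian penalty; the Gaussian blending weights and the uniform Laplacian below are illustrative choices, not the exact structured layer of the paper.

def handle_deformation(vertices, handle_idx, handle_disp, sigma=0.2):
    # Each vertex is displaced by a distance-weighted blend of the handle
    # displacements (Gaussian weights over vertex-to-handle distances).
    d = torch.cdist(vertices, vertices[handle_idx])          # (V, H)
    w = torch.softmax(-(d ** 2) / (2 * sigma ** 2), dim=1)   # (V, H)
    return vertices + w @ handle_disp

def laplacian_regularizer(vertices, edges):
    # Uniform mesh-Laplacian penalty: every vertex should stay close to the
    # mean of its neighbours, keeping the deformation locally smooth.
    num_verts = vertices.shape[0]
    adj = torch.zeros(num_verts, num_verts)
    adj[edges[:, 0], edges[:, 1]] = 1.0
    adj[edges[:, 1], edges[:, 0]] = 1.0
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return ((vertices - (adj @ vertices) / deg) ** 2).sum(dim=1).mean()

# Toy usage: a unit square with two handles.
verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 0]])
deformed = handle_deformation(verts, torch.tensor([0, 2]), 0.1 * torch.randn(2, 3))
print(laplacian_regularizer(deformed, edges))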
To The Point: Correspondence-driven monocular 3D category reconstruction
We present To The Point (TTP), a method for reconstructing 3D objects from a single image using 2D to 3D correspondences learned from weak supervision. We recover a 3D shape from a 2D image by first regressing the 2D positions corresponding to the 3D template vertices and then jointly estimating a rigid camera transform and non-rigid template deformation that optimally explain the 2D positions through the 3D shape projection. By relying on 3D-2D correspondences we use a simple per-sample optimization problem to replace CNN-based regression of camera pose and non-rigid deformation and thereby obtain substantially more accurate 3D reconstructions. We treat this optimization as a differentiable layer and train the whole system in an end-to-end manner. We report systematic quantitative improvements on multiple categories and provide qualitative results comprising diverse shape, pose and texture prediction examples. Project website: https://fkokkinos.github.io/to_the_point
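The per-sample fit described above can be sketched as a small differentiable optimization: given 2D positions regressed for the 3D template vertices, jointly estimate a camera and a non-rigid deformation that best explain them. The weak-perspective camera, the plain Adam inner loop, and the simple deformation penalty below are assumptions for illustration, not the paper's exact formulation.

import torch

def fit_camera_and_deformation(template, target_2d, steps=100, lr=0.05, reg=1e-2):
    # template: (V, 3) canonical template vertices; target_2d: (V, 2) regressed
    # 2D correspondences. Recover (scale, tx, ty) and per-vertex offsets by
    # minimizing reprojection error plus a deformation penalty.
    camera = torch.tensor([1.0, 0.0, 0.0], requires_grad=True)
    deform = torch.zeros_like(template, requires_grad=True)
    opt = torch.optim.Adam([camera, deform], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        shape = template + deform
        proj = camera[0] * shape[:, :2] + camera[1:]   # weak-perspective projection
        loss = ((proj - target_2d) ** 2).mean() + reg * (deform ** 2).mean()
        loss.backward()
        opt.step()
    return camera.detach(), deform.detach()

# Usage with a random template and noisy synthetic 2D targets.
V = 500
template = torch.randn(V, 3)
target = 0.8 * template[:, :2] + 0.1 + 0.01 * torch.randn(V, 2)
camera, deform = fit_camera_and_deformation(template, target)
print(camera)  # roughly recovers scale 0.8 and translation (0.1, 0.1)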
Deep Structured Layers for Instance-Level Optimization in 2D and 3D Vision
The approach we present in this thesis is that of integrating optimization problems
as layers in deep neural networks. Optimization-based modeling provides an additional
set of tools enabling the design of powerful neural networks for a wide range of
computer vision tasks. This thesis presents formulations and experiments for vision
tasks ranging from image reconstruction to 3D reconstruction.
We first propose an unrolled optimization method with implicit regularization
properties for reconstructing images from noisy camera readings. The method resembles an unrolled majorization-minimization framework with convolutional neural networks acting as regularizers. We report state-of-the-art performance in image
reconstruction on both noisy and noise-free evaluation setups across many datasets.
We further focus on the task of monocular 3D reconstruction of articulated objects using video self-supervision. The proposed method uses a structured layer for
accurate object deformation that controls a 3D surface by displacing a small number
of learnable handles. While relying on a small set of training data per category for
self-supervision, the method obtains state-of-the-art reconstruction accuracy with
diverse shapes and viewpoints for multiple articulated objects.
We finally address a shortcoming of the previous method: the need to regress the
camera pose using multiple hypotheses. We propose a method that recovers a 3D shape
from a 2D image by relying solely on 3D-2D correspondences regressed from a
convolutional neural network. These correspondences are used in conjunction with an
optimization problem to estimate the camera pose and deformation per sample. We
quantitatively show the effectiveness of the proposed method on self-supervised 3D
reconstruction on multiple categories without the need for multiple hypotheses.
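The image-reconstruction part of the thesis describes an unrolled majorization-minimization scheme with convolutional networks as regularizers. Below is a minimal sketch of that flavour of unrolled reconstruction; the residual denoiser blocks, the learned per-stage step sizes, and the simple data term are assumptions for illustration, not the thesis' exact architecture.

import torch
import torch.nn as nn

class DenoiserBlock(nn.Module):
    # A small CNN acting as a learned regularizer (one stage of the unrolled scheme).
    def __init__(self, channels=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return x - self.net(x)  # residual denoising step

class UnrolledMM(nn.Module):
    # Each iteration takes a gradient step on the data term ||x - y||^2 and then
    # applies a CNN regularizer; stage count and step sizes are illustrative.
    def __init__(self, num_stages=5, channels=3):
        super().__init__()
        self.stages = nn.ModuleList(DenoiserBlock(channels) for _ in range(num_stages))
        self.step = nn.Parameter(torch.full((num_stages,), 0.5))

    def forward(self, y):
        x = y.clone()
        for k, stage in enumerate(self.stages):
            x = x - self.step[k] * (x - y)   # data-fidelity gradient step
            x = stage(x)                     # learned regularization
        return x

# Usage: reconstruct a batch of noisy RGB readings.
model = UnrolledMM()
noisy = torch.rand(2, 3, 64, 64)
restored = model(noisy)
print(restored.shape)  # torch.Size([2, 3, 64, 64])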
Replay: Multi-modal Multi-view Acted Videos for Casual Holography
We introduce Replay, a collection of multi-view, multi-modal videos of humans
interacting socially. Each scene is filmed in high production quality, from
different viewpoints with several static cameras, as well as wearable action
cameras, and recorded with a large array of microphones at different positions
in the room. Overall, the dataset contains over 4000 minutes of footage and
over 7 million timestamped high-resolution frames annotated with camera poses
and partially with foreground masks. The Replay dataset has many potential
applications, such as novel-view synthesis, 3D reconstruction, novel-view
acoustic synthesis, human body and face analysis, and training generative
models. We provide a benchmark for training and evaluating novel-view
synthesis, with two scenarios of different difficulty. Finally, we evaluate
several baseline state-of-the-art methods on the new benchmark.
Comment: Accepted for ICCV 2023. Roman, Yanir, and Ignacio contributed equally.